
Implement Gemini API integration#3

Merged
heidi-dang merged 7 commits into main from implement-GEMINI-api on Jan 27, 2026

Conversation

@heidi-dang
Owner

Summary

- Add Gemini API integration and related UI changes
- Merge main into this branch to include stable workflows and repo guards

Local checks

- python3 -m compileall -q .
- npm -C ui run lint
- npm -C ui run build

@gemini-code-assist

Summary of Changes

Hello @heidi-dang, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the application's capabilities by integrating Google's Gemini API, offering users a choice of large language models for assistant chat. The changes involve implementing a new Python client for Gemini, adapting the chat session to conditionally use either Claude or Gemini, and updating the user interface and documentation to reflect this new option. Concurrently, the PR introduces robust deployment tooling, including Traefik integration and a simplified deployment script, which enhances the overall developer experience and operational stability.

Highlights

  • Gemini API Integration: The application now supports Google's Gemini API for assistant chat, leveraging its OpenAI-compatible endpoint. This allows users to choose Gemini as an alternative large language model for conversational interactions.
  • Conditional Model Selection: The core chat session logic has been updated to dynamically select between Claude and Gemini based on environment configuration. When Gemini is active, it operates in a chat-only mode without tool support, with coding agents still relying on Claude/Anthropic.
  • Deployment Enhancements: New Docker Compose configurations for Traefik reverse proxy and a comprehensive one-click deployment script have been added. These additions streamline the application's deployment process, including automatic TLS with Let's Encrypt and DuckDNS integration.
  • UI and Documentation Updates: The setup wizard in the user interface now includes a status indicator for Gemini configuration, and the README.md has been updated with clear instructions for setting up the Gemini API. A new DEVELOPMENT.md outlines the project's future roadmap, including plans for model selection UI and Gemini tooling guardrails.
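The conditional selection described in the highlights can be sketched as a small helper. This is a minimal illustration, not the PR's actual code; the function name and the `GEMINI_API_KEY` gate are assumptions based on the environment variables mentioned elsewhere in this PR.

```python
import os

def resolve_chat_provider() -> str:
    """Pick the assistant-chat provider from the environment (sketch).

    Per the PR description, Gemini handles assistant chat only when it is
    configured; coding agents continue to use Claude/Anthropic.
    """
    if os.getenv("GEMINI_API_KEY"):
        return "gemini"  # chat-only mode, no tool support
    return "claude"
```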
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces Gemini API integration for the assistant chat feature, a valuable addition. The implementation includes a new Gemini client, backend logic to switch between Claude and Gemini providers, and corresponding UI updates. While the overall structure is good, I've identified a couple of critical issues. The default Gemini base URL is incorrect in both the documentation and the client code, which will prevent the feature from working out-of-the-box. Additionally, the client incorrectly processes the streaming response from the API. I've provided specific suggestions to address these problems. I also noted a minor opportunity to simplify the code by removing a redundant parameter. The inclusion of a development roadmap and deployment scripts is a great enhancement for the project.

### Optional: Gemini API (assistant chat only)
- `GEMINI_API_KEY` (required)
- `GEMINI_MODEL` (optional, default `gemini-1.5-flash`)
- `GEMINI_BASE_URL` (optional, default `https://generativelanguage.googleapis.com/v1beta/openai`)
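Read together, the variables above might be loaded roughly as follows. `gemini_settings` is a hypothetical helper for illustration; the model default mirrors the README excerpt, and the base-URL default is deliberately left to the environment, since its correct value is debated in the review comments on this page.

```python
import os

def gemini_settings() -> dict:
    """Load the optional Gemini configuration listed above (sketch)."""
    return {
        "api_key": os.environ["GEMINI_API_KEY"],             # required
        "model": os.getenv("GEMINI_MODEL", "gemini-1.5-flash"),
        "base_url": os.getenv("GEMINI_BASE_URL"),            # optional override
    }
```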


high

The default GEMINI_BASE_URL is incorrect. According to Google's documentation for the OpenAI compatibility layer, the base URL should be https://generativelanguage.googleapis.com/v1beta. The /openai suffix is not part of the official endpoint and will cause connection errors with the default configuration.

Suggested change
- `GEMINI_BASE_URL` (optional, default `https://generativelanguage.googleapis.com/v1beta/openai`)
- `GEMINI_BASE_URL` (optional, default `https://generativelanguage.googleapis.com/v1beta`)

from openai import AsyncOpenAI

# Default OpenAI-compatible base URL for Gemini
DEFAULT_GEMINI_BASE_URL = "https://generativelanguage.googleapis.com/v1beta/openai"


high

The DEFAULT_GEMINI_BASE_URL is incorrect. The correct base URL for the Gemini API's OpenAI compatibility layer is https://generativelanguage.googleapis.com/v1beta. Using the current URL will result in failed API requests.

Suggested change
DEFAULT_GEMINI_BASE_URL = "https://generativelanguage.googleapis.com/v1beta/openai"
DEFAULT_GEMINI_BASE_URL = "https://generativelanguage.googleapis.com/v1beta"

Comment on lines +75 to +80
if delta and delta.content:
# delta.content is a list of content parts
for part in delta.content:
text = getattr(part, "text", None) or part.get("text") if isinstance(part, dict) else None
if text:
yield text


high

The logic for processing the streaming response is incorrect. For a streaming chat completion with the openai library, delta.content is a string containing the next chunk of text, not a list of parts. The current implementation iterates over the characters of this string but fails to extract any text, which will result in no output from the stream. This should be simplified to directly yield the content.

            if delta and delta.content:
                # The content from the streaming delta is a string, not a list of parts.
                yield delta.content

Comment on lines +434 to +438
async for text in stream_chat(
message,
system_prompt=self._system_prompt,
model=os.getenv("GEMINI_MODEL"),
):


medium

The model parameter passed to stream_chat is redundant. The stream_chat function already handles retrieving the model from the GEMINI_MODEL environment variable and applying a default. Removing this parameter simplifies the code and avoids duplicating the model resolution logic.

            async for text in stream_chat(
                message,
                system_prompt=self._system_prompt,
            ):
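The duplication the reviewer points at disappears if model resolution lives in one place inside `stream_chat`. A minimal sketch of that resolution order, assuming a constant and function name that are not the PR's actual identifiers:

```python
import os

DEFAULT_GEMINI_MODEL = "gemini-1.5-flash"  # default quoted in the README excerpt

def resolve_model(explicit=None):
    """Resolve the Gemini model once: explicit argument, then env var, then default."""
    return explicit or os.getenv("GEMINI_MODEL") or DEFAULT_GEMINI_MODEL
```

With this inside `stream_chat`, callers never need to read `GEMINI_MODEL` themselves.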

@heidi-dang heidi-dang merged commit e8ab8b8 into main Jan 27, 2026
3 checks passed
@heidi-dang heidi-dang deleted the implement-GEMINI-api branch January 27, 2026 09:54